Generative models have been highly successful in recent years and have received significant attention for synthetic data generation. As deep learning models grow more complex, they require large amounts of data to perform accurately. In medical image analysis, such generative models play a crucial role, as the available data is limited due to challenges related to data privacy, lack of data diversity, and uneven data distributions. In this paper, we present a method to generate brain tumor MRI images using generative adversarial networks. We utilize StyleGAN2 with the ADA methodology to generate high-quality brain MRI with tumors while using a significantly smaller amount of training data than existing approaches. We use three pre-trained models for transfer learning. Results demonstrate that the proposed method can learn the distributions of brain tumors. Furthermore, the model can generate high-quality synthetic brain MRI with tumors, mitigating small-sample-size issues. The approach addresses limited data availability by generating realistic-looking brain MRI with tumors. The code is available at: ~\url{https://github.com/rizwanqureshi123/Brain-Tumor-Synthetic-Data}.
Time-critical control applications typically pose stringent connectivity requirements for communication networks. The imperfections associated with the wireless medium, such as packet losses, synchronization errors, and varying delays, have a detrimental effect on the performance of real-time control, often with safety implications. This paper introduces multi-service edge intelligence as a new paradigm for realizing time-critical control over wireless. It presents the concept of multi-service edge intelligence, which revolves around tight integration of wireless access, edge computing, and machine learning techniques in order to provide stability guarantees under wireless imperfections. The paper articulates some of the key system design aspects of multi-service edge intelligence. It also presents a temporal-adaptive prediction technique to cope with dynamically changing wireless environments, and provides performance results in a robotic teleoperation scenario. Finally, it discusses some open research and design challenges for multi-service edge intelligence.
Low-field (LF) MRI scanners have the potential to revolutionize medical imaging by providing a portable and cheaper alternative to high-field MRI scanners. However, such scanners are usually significantly noisier and of lower quality than their high-field counterparts. The aim of this paper is to improve the SNR and overall image quality of low-field MRI scans in order to improve diagnostic capability. To address this issue, we propose a Nested U-Net neural network super-resolution architecture that outperforms previously suggested deep learning methods, with an average PSNR of 78.83 and SSIM of 0.9551. We tested our network on artificially noised, downsampled synthetic data from a major T1-weighted MRI dataset called the T1-mix dataset. One board-certified radiologist scored 25 images on a Likert scale (1-5), assessing overall image quality, anatomical structure, and diagnostic confidence across our architecture and other published works (SR DenseNet, Generator Block, SRCNN, etc.). We also introduce a new loss function called natural log mean squared error (NLMSE). In conclusion, we present a more accurate deep learning method for single-image super-resolution applied to synthetic low-field MRI via a Nested U-Net architecture.
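The abstract names the NLMSE loss but does not define it. One plausible reading, sketched below purely as an assumption, is an ordinary mean squared error computed on natural-log-transformed intensities:

```python
import numpy as np

def nlmse(pred, target, eps=1e-8):
    """Hypothetical natural-log MSE: mean squared error computed on
    natural-log-transformed intensities. eps guards against log(0)."""
    return np.mean((np.log(pred + eps) - np.log(target + eps)) ** 2)
```

A log-domain error would down-weight differences in bright regions relative to dark ones, which is one conceivable motivation, but the paper itself would need to be consulted for the exact formulation.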
Vascular shunt insertion is a fundamental surgical procedure used to temporarily restore blood flow to tissues. It is often performed in the field after major trauma. We formulate a problem of automated vascular shunt insertion and propose a pipeline to perform Automated Vascular Shunt Insertion (AVSI) using a da Vinci Research Kit. The pipeline uses a learned visual model to estimate the locus of the vessel rim, plans a grasp on the rim, and moves to grasp at that point. The first robot gripper then pulls the rim to stretch open the vessel with a dilation motion. The second robot gripper then proceeds to insert a shunt into the vessel phantom (a model of the blood vessel) with a chamfer tilt followed by a screw motion. Results suggest that AVSI achieves a high success rate even with tight tolerances and varying vessel orientations up to 30{\deg}. Supplementary material, dataset, videos, and visualizations can be found at https://sites.google.com/berkeley.edu/autolab-avsi.
Automated learning from data can accelerate and enhance smart city applications. In the context of Internet of Things (IoT) ecosystems, data communication is often costly, inefficient, non-scalable, and lacking in security. Federated learning (FL) plays a pivotal role in providing a privacy-preserving and communication-efficient machine learning (ML) framework. In this paper, we evaluate the feasibility of FL in a smart city street light monitoring application. FL is benchmarked against centralized and (fully) personalized machine learning techniques on a classification task for lamppost operation. Incorporating FL in this scenario shows minimal degradation in classification performance, but large improvements in communication cost and privacy preservation. These results strengthen the viability of FL for IoT applications.
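The abstract does not spell out the FL aggregation rule; a minimal sketch of the canonical FedAvg-style weighted parameter averaging that such a setup typically uses (function and argument names here are illustrative, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighted by each client's local sample count.
    client_weights: one list of np.ndarrays per client (same shapes);
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_layers)
    ]
```

Only these aggregated parameters travel over the network, which is the source of the communication and privacy gains the abstract reports.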
In modern capital markets, stock prices are generally considered highly volatile and unpredictable due to various social, financial, political, and other dynamic factors. With calculated and thoughtful investment, the stock market can yield handsome profits from minimal capital investment, while incorrect predictions can easily bring catastrophic financial losses to investors. This paper introduces the application of a recently introduced machine learning model, the Transformer, to predict future prices on the Dhaka Stock Exchange (DSE), the leading stock exchange of Bangladesh. Transformer models have been extensively used in natural language processing and computer vision tasks, but, to the best of our knowledge, have never been applied to stock price prediction on the DSE. Recently, the Time2Vec encoding for representing time-series features was introduced, making it possible to employ Transformer models for stock price prediction. This paper concentrates on the application of Transformer-based models to predict the price movements of eight specific stocks listed on the DSE, based on their historical and weekly data. Our experiments demonstrate promising results and acceptable root-mean-square error for most of the stocks.
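Time2Vec has a simple closed form: the first component is linear in time, and the remaining components pass through a periodic activation, usually sine. A minimal sketch, with the parameters `omega` and `phi` (learned in the actual model) passed in explicitly for illustration:

```python
import numpy as np

def time2vec(tau, omega, phi):
    """Time2Vec encoding of a scalar time tau: component 0 is linear
    (omega[0]*tau + phi[0]); components 1..k-1 are sin(omega[i]*tau + phi[i])."""
    v = omega * tau + phi
    v[1:] = np.sin(v[1:])
    return v
```

The resulting vector replaces raw timestamps as input features, giving the Transformer both a trend term and periodic terms to attend over.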
The rapid growth of connected devices has led to the proliferation of novel cyber-security threats known as zero-day attacks. Traditional behavior-based intrusion detection systems (IDS) rely on deep neural networks (DNNs) to detect these attacks. The quality of the dataset used to train the DNNs plays a critical role in detection performance, with under-represented samples causing poor performance. In this paper, we develop and evaluate the performance of deep belief networks (DBNs) in detecting cyber-attacks within a network of connected devices. The CICIDS2017 dataset is used to train and evaluate the performance of our proposed DBN approach. Several class-balancing techniques are applied and evaluated. Finally, we compare our approach against a conventional MLP model and the existing state of the art. Our proposed DBN approach shows competitive and promising results, with significant improvement in the detection of attacks that are under-represented in the training dataset.
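The abstract mentions several class-balancing techniques without naming them. The simplest such technique, random oversampling of every class up to the majority count, can be sketched as follows (a stand-in for illustration; the paper may use other methods such as SMOTE or class weighting):

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Balance classes by resampling each class (with replacement)
    up to the size of the largest class. Returns a balanced copy."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=target, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]
```

Balancing the training set in this way is one direct route to the improved detection of under-represented attack classes that the abstract highlights.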
Neural code intelligence (CI) models are opaque black boxes that offer little insight into the features they use in making predictions. This opacity can lead to distrust of their predictions and hinder their wide adoption in safety-critical applications. Recently, input program reduction techniques have been proposed to identify key features in input programs in order to improve the transparency of CI models. However, such approaches are syntax-unaware and do not consider the grammar of the programming language. In this paper, we apply a syntax-guided program reduction technique that considers the grammar of the input programs during reduction. Our experiments on multiple models and different types of input programs show that the syntax-guided reduction technique is faster and provides smaller sets of key tokens in the reduced programs. We also show that the key tokens can be used to generate adversarial examples for up to 65% of the input programs.
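One way to picture syntax-guided reduction is greedy deletion of whole statements (i.e., grammatically valid subtrees) while the model's prediction is preserved. The sketch below uses Python's `ast` module; `preserves_prediction` is a hypothetical callback standing in for a query to the CI model, and the single-function scope is a simplification:

```python
import ast

def syntax_guided_reduce(source, preserves_prediction):
    """Greedily delete whole statements from the first top-level function
    while the (stand-in) model predicate still holds on the reduced program."""
    tree = ast.parse(source)
    body = tree.body[0].body  # statements of the first top-level function
    i = 0
    while i < len(body):
        candidate = body[:i] + body[i + 1:]  # drop statement i
        if candidate:
            tree.body[0].body = candidate
            if preserves_prediction(ast.unparse(tree)):
                body = candidate  # deletion kept the prediction: commit it
                continue
        tree.body[0].body = body  # restore and move on
        i += 1
    return ast.unparse(tree)
```

Because every candidate is produced by removing a complete subtree, each intermediate program still parses, which is the contrast with syntax-unaware (e.g., character- or token-level) reduction.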
Robotic surgical assistants (RSAs) are commonly used by expert surgeons to perform minimally invasive surgery. However, long procedures filled with tedious and repetitive tasks such as suturing can lead to surgeon fatigue, motivating the automation of suturing. Because visually tracking a thin reflective needle is extremely challenging, prior work has modified the needle with nonreflective contrasting paint. As a step toward automating a suturing subtask without modifying the needle, we propose HOUSTON: Handoff of Unmodified, Surgical, Tool-Obstructed Needles, a problem and algorithm that uses a learned active sensing policy with a stereo camera to localize and align the needle into a visible and accessible pose for the other arm. To compensate for robot positioning and needle perception errors, the algorithm then executes a high-precision grasping motion using multiple cameras. In physical experiments using the da Vinci Research Kit (dVRK), HOUSTON successfully passes unmodified surgical needles with a 96.7% success rate and is able to sequentially perform handoffs between arms an average of 32.4 times before failure. On needles unseen during training, HOUSTON achieves success rates of 75-92.9%. To our knowledge, this work is the first to study the handoff of unmodified surgical needles. See https://tinyurl.com/huston-surgery for additional materials.
Deep neural networks (DNNs) are increasingly being used in software engineering and code intelligence tasks. These are powerful tools capable of learning highly generalizable patterns from large datasets through millions of parameters. At the same time, their large capacity can render them prone to memorizing data points. Recent work suggests that the memorization risk manifests especially strongly when the training dataset is noisy, involving many ambiguous or questionable samples, and memorization is the only recourse. The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models. It aims to provide insights into how memorization may impact the learning behavior of neural models in code intelligence systems. To observe the extent of memorization in models, we add random noise to the original training datasets and use various metrics to quantify the impact of noise on various aspects of training and testing. We evaluate several state-of-the-art neural code intelligence models and benchmarks based on Java, Python, and Ruby codebases. Our results highlight an important risk: millions of trainable parameters allow neural networks to memorize anything, including noisy data, and provide a false sense of generalization. We observe that all models manifest some form of memorization. This can be concerning in most code intelligence tasks, as they rely on rather noise-prone and repetitive data sources such as code from GitHub. To the best of our knowledge, we provide the first study to quantify memorization effects in the domain of software engineering and code intelligence systems. This work raises awareness of, and provides new insights into, important issues in training neural models that are usually overlooked by software engineering researchers.
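One simple way to inject the random noise such a study describes is to flip a fraction of training labels to a different class and track how the model behaves on exactly those corrupted points; a minimal sketch (the paper's precise noising procedure may differ):

```python
import numpy as np

def add_label_noise(labels, noise_rate, num_classes, rng=None):
    """Flip a fraction of labels to a different random class and return
    the noisy labels plus the indices that were corrupted, so metrics can
    be computed separately on clean vs. noisy samples."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels).copy()
    n_noisy = int(len(labels) * noise_rate)
    idx = rng.choice(len(labels), size=n_noisy, replace=False)
    shift = rng.integers(1, num_classes, size=n_noisy)  # nonzero shift
    labels[idx] = (labels[idx] + shift) % num_classes   # guaranteed change
    return labels, idx
```

If a model reaches high training accuracy on the corrupted indices, it can only have done so by memorization, since those labels carry no learnable signal.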